


Appendices

Neural Information Processing Systems

Additionally, to avoid gradients with infinite means even if $D_L$ is not contractive, we consider a spectral normalisation, so that instead of computing recursively $\eta_0 = \varepsilon$ and $\eta_k = D_L \eta_{k-1}$ for $k \in \{1, \dots, N\}$, we set $\eta_0 = \varepsilon$ and … The motivation was to have a quadratic increase of the penalty term as the largest absolute eigenvalue approaches 1, and then to switch smoothly to a linear function for values larger than $\delta_2$. The suggested approach can perform poorly for non-convex potentials, or even for convex potentials such as those arising in a logistic regression model for some data sets. The idea now is to run HMC with a unit mass matrix for the transformed variables $z = f^{-1}(q)$, where $q \sim \pi$. Hessian-vector products can similarly be computed using vector-Jacobian products: with $g(z) = \operatorname{grad}(U, z)$, we then compute $\nabla^2 U(z)\, w = \operatorname{vjp}(g, z, w)^{\top}$ for $z = f^{-1}(\operatorname{stop\_grad}(f(z_{\lfloor L/2 \rfloor})))$. We also stop all $U$ gradients, i.e. …
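The Hessian-vector-product trick above can be made concrete with a minimal JAX sketch. The potential U, the evaluation point z, and the vector w below are hypothetical placeholders, not the paper's model; the sketch only shows how $\nabla^2 U(z)\,w$ follows from a vector-Jacobian product of the gradient, with gradients through $z$ stopped as described:

```python
import jax
import jax.numpy as jnp

def U(z):
    # Hypothetical potential; stands in for the paper's potential of the
    # transformed variable (not the actual model).
    return 0.5 * jnp.sum(z ** 2) + jnp.sum(jnp.cos(z))

g = jax.grad(U)  # g(z) = grad(U, z)

def hvp(z, w):
    # vjp of g at z applied to w gives w^T Hess U(z); since the Hessian is
    # symmetric, this equals Hess U(z) @ w.
    _, vjp_fn = jax.vjp(g, z)
    return vjp_fn(w)[0]

z = jax.lax.stop_gradient(jnp.array([0.3, -1.2, 0.7]))  # gradients through z are stopped
w = jnp.array([1.0, 0.0, 0.0])
print(hvp(z, w))
```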



Appendices A Gradient terms for the adaptation scheme

Neural Information Processing Systems

A.1 Gradients for the entropy approximation. Following the arguments in [13], we can compute the gradient of the term in (13) with respect to $\theta$. A.2 Gradients for the penalty function. We used the penalty function $h(x) = (x - \delta) \dots$ A.3 Gradients for the energy error. We can write the energy error as $(q \dots$ We generalise the arguments from [14], Lemma 7. Proceeding by induction over $n$, we have for the case $n = 1$, for any $v \in \mathbb{R}^d$, … The suggested approach can perform poorly for non-convex potentials, or even for convex potentials such as those arising in a logistic regression model for some data sets. We illustrate here how to learn a reasonable proposal for a general potential function by considering a version of position-dependent preconditioning. The transformation $f$, as well as $U$, generally depends on some parameters $\theta$ that we again omit for a less convoluted notation. Our approach can be seen as an alternative, for instance, to [31], where such a transformation is first learned by approximating $\pi$ with a standard Gaussian density using variational inference, while the HMC hyperparameters are adapted in a second step using Bayesian optimisation. The motivation for stopping the gradients comes from considering the special case $f\colon z \mapsto Cz$ that corresponds to the position-independent preconditioning scheme above.
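A minimal JAX sketch of the scheme just described, i.e. HMC with a unit mass matrix on the transformed variables, for the linear special case $f\colon z \mapsto Cz$. The preconditioner C, the Gaussian placeholder target, and the step-size settings are illustrative assumptions, not the paper's configuration:

```python
import jax
import jax.numpy as jnp

C = jnp.array([[2.0, 0.0], [0.3, 0.5]])  # illustrative invertible preconditioner (assumption)

def log_pi(q):
    return -0.5 * jnp.sum(q ** 2)  # placeholder target log-density (assumption)

def U(z):
    # Potential of the transformed variable z = f^{-1}(q) for f: z -> Cz;
    # the log-Jacobian term is constant for linear f and can be dropped.
    return -log_pi(C @ z)

grad_U = jax.grad(U)

def hmc_step(key, z, eps=0.1, L=10):
    key, k_mom, k_acc = jax.random.split(key, 3)
    p = jax.random.normal(k_mom, z.shape)        # unit mass matrix in z-space
    # Leapfrog with half steps for the momentum at both ends.
    z_new, p_new = z, p - 0.5 * eps * grad_U(z)
    for _ in range(L - 1):
        z_new = z_new + eps * p_new
        p_new = p_new - eps * grad_U(z_new)
    z_new = z_new + eps * p_new
    p_new = p_new - 0.5 * eps * grad_U(z_new)
    # Metropolis accept/reject on the energy error.
    dH = (U(z_new) + 0.5 * jnp.sum(p_new ** 2)) - (U(z) + 0.5 * jnp.sum(p ** 2))
    accept = jnp.log(jax.random.uniform(k_acc)) < -dH
    return key, jnp.where(accept, z_new, z)

key, z = jax.random.PRNGKey(0), jnp.zeros(2)
for _ in range(5):
    key, z = hmc_step(key, z)
print(C @ z)  # map the sample back to the original space, q = f(z)
```

Since $f$ is linear here, the Jacobian correction to the potential is a constant and is dropped; for a genuinely position-dependent $f$ it would have to be included in $U$.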


Hamiltonian Monte Carlo on ReLU Neural Networks is Inefficient

Dinh, Vu C., Ho, Lam Si Tung, Nguyen, Cuong V.

arXiv.org Machine Learning

We analyze the error rates of the Hamiltonian Monte Carlo algorithm with leapfrog integrator for Bayesian neural network inference. We show that due to the non-differentiability of activation functions in the ReLU family, leapfrog HMC for networks with these activation functions has a large local error rate of $\Omega(\epsilon)$ rather than the classical error rate of $O(\epsilon^3)$. This leads to a higher rejection rate of the proposals, making the method inefficient. We then verify our theoretical findings through empirical simulations as well as experiments on a real-world dataset that highlight the inefficiency of HMC inference on ReLU-based neural networks compared to analytical networks.
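The contrast between the $\Omega(\epsilon)$ and $O(\epsilon^3)$ local error rates can be seen in a toy experiment (not from the paper): one leapfrog step that crosses the kink of the non-smooth potential $U(q) = |q|$, standing in for a ReLU-type non-differentiability, versus the smooth potential $U(q) = q^2/2$. The potentials and starting points are illustrative assumptions:

```python
import jax
import jax.numpy as jnp

def energy_error(U, q, p, eps):
    # Absolute change in the Hamiltonian after a single leapfrog step.
    grad_U = jax.grad(U)
    H0 = U(q) + 0.5 * p ** 2
    p_half = p - 0.5 * eps * grad_U(q)
    q_new = q + eps * p_half
    p_new = p_half - 0.5 * eps * grad_U(q_new)
    return jnp.abs(U(q_new) + 0.5 * p_new ** 2 - H0)

for eps in [0.1, 0.01, 0.001]:
    # Start just left of the kink at 0 so the step crosses it.
    err_kink = energy_error(lambda q: jnp.abs(q), -eps / 4, 1.0, eps)
    err_smooth = energy_error(lambda q: 0.5 * q ** 2, -eps / 4, 1.0, eps)
    print(eps, float(err_kink), float(err_smooth))
```

In this toy, the energy error for the kinked potential shrinks only linearly in $\epsilon$, while the smooth case shrinks at a higher order, mirroring the contrast the paper proves for ReLU versus analytical activations.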


Algebraic Geometrical Analysis of Metropolis Algorithm When Parameters Are Non-identifiable

Nagata, Kenji, Mototake, Yoh-ichi

arXiv.org Machine Learning

The Metropolis algorithm is one of the Markov chain Monte Carlo (MCMC) methods that realize sampling from a target probability distribution. In this paper, we are concerned with sampling in non-identifiable cases, which involve models whose Fisher information matrices may fail to be invertible. The theoretical adjustment of the step size, i.e. the variance of the candidate distribution, is difficult in such cases. In this study, to establish a principle for this adjustment, the average acceptance rate, which is used as a guideline to optimize the step size in MCMC methods, was analytically derived in non-identifiable cases. An optimization principle for the step size was then developed from the viewpoint of the average acceptance rate. In addition, we performed numerical experiments on some specific target distributions to verify the effectiveness of our theoretical results.
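How the average acceptance rate depends on the step size can be illustrated with a minimal random-walk Metropolis sketch in JAX. The standard Gaussian target, dimension, and step sizes are illustrative assumptions; the paper's non-identifiable models are not reproduced here:

```python
import jax
import jax.numpy as jnp

def log_target(x):
    return -0.5 * jnp.sum(x ** 2)  # placeholder target log-density, up to a constant

def metropolis_acceptance_rate(step_size, n_steps=5_000, dim=2, seed=0):
    # Random-walk Metropolis: Gaussian proposal with standard deviation step_size.
    key = jax.random.PRNGKey(seed)
    x = jnp.zeros(dim)
    accepted = 0
    for _ in range(n_steps):
        key, k_prop, k_acc = jax.random.split(key, 3)
        proposal = x + step_size * jax.random.normal(k_prop, (dim,))
        log_alpha = log_target(proposal) - log_target(x)
        if jnp.log(jax.random.uniform(k_acc)) < log_alpha:
            x, accepted = proposal, accepted + 1
    return accepted / n_steps

# Larger steps lower the average acceptance rate; smaller steps raise it.
for step in [0.1, 1.0, 5.0]:
    print(step, metropolis_acceptance_rate(step))
```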